Shape Recognition with Nearest Neighbor Isomorphic Network
Authors
Abstract
The nearest neighbor isomorphic network paradigm combines sigma-pi units in the hidden layer with product units in the output layer. Good initial weights can be found by clustering the input training vectors, and the network can then be trained successfully via back-propagation learning. We give theoretical conditions under which the product operation can replace the Min operation, and summarize the advantages of the product operation. Under certain sufficient conditions, the product operation yields the same classification result as the Min operation. We apply our algorithm to a geometric shape recognition problem and compare its performance with that of two other well-known algorithms.

INTRODUCTION

As pointed out by Duda and Hart [1] and Fukunaga [2], the nearest neighbor classifier (NNC) approximates the minimum-error Bayesian classifier in the limit as the number of reference vectors grows large. When the joint probability density of the feature vectors is unknown, the NNC would be the preferred classifier, except for two problems. First, a prohibitive amount of computation is required for its use. Second, the NNC's performance is usually not optimized with respect to the training data. As more hardware for parallel processing becomes available, the first problem will be solved. Several neural networks isomorphic to NNCs have been developed to attack the second problem. These include the learning vector quantization (LVQ) of Kohonen [3], the counter-propagation network of Hecht-Nielsen [4], the adaptive-clustering network of Barnard and Casasent [5], and the nearest neighbor isomorphic network (NNIN) of Yau and Manry [6]. In this paper we discuss properties of product units that allow them to substitute for Min units in the NNIN, and we compare its performance with that of LVQ2.1 on the geometric shape recognition problem.
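For readers unfamiliar with the baseline, the NNC discussed above can be sketched in a few lines: classify a query by the label of its closest reference vector. This is an illustrative sketch (not code from the paper), using plain squared Euclidean distance and hypothetical toy data.

```python
import numpy as np

def nearest_neighbor_classify(x, refs, labels):
    """Classify x by the label of its closest reference vector.

    refs:   (N, Nf) array of reference feature vectors
    labels: (N,) array of class labels
    """
    d2 = np.sum((refs - x) ** 2, axis=1)  # squared Euclidean distances
    return labels[np.argmin(d2)]

# toy example: two classes in a 2-D feature space
refs = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
labels = np.array([0, 0, 1, 1])
print(nearest_neighbor_classify(np.array([0.1, 0.0]), refs, labels))  # -> 0
print(nearest_neighbor_classify(np.array([1.0, 0.9]), refs, labels))  # -> 1
```

The two problems named above are visible even here: every query scans all N references, and nothing in the rule is tuned to the training data.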
1 NETWORK TOPOLOGY AND TRAINING

The NNIN, a back-propagation (BP) network isomorphic to a type of NNC, uses fairly conventional sigma-pi units [7] in the hidden layer and units similar to product units [8] in the output layer. A set of good initial weights and thresholds can be determined directly from the reference feature vectors via appropriate mapping equations.

Each pattern is represented by a feature vector of dimension Nf, x = [x(1), x(2), ..., x(Nf)]^T. As shown in Fig. 1, the first processing layer consists of Nc·Ns sigma-pi units connected to the Nf input features, where Nc is the number of classes and Ns is the number of clusters per class. Let Snet(i,j), Sθ(i,j), and Sout(i,j) denote the net input, threshold, and output of the jth unit of the ith class, respectively, and let Sw1(i,j,k) and Sw2(i,j,k) denote the connection weights from the kth input feature to the jth sigma-pi unit of the ith class. The sigma-pi unit net input and activation are, respectively,

S_{net}(i,j) = S_\theta(i,j) + \sum_{k=1}^{N_f} \left[ S_{w1}(i,j,k)\, x(k) + S_{w2}(i,j,k)\, x^2(k) \right] ,

S_{out}(i,j) = \frac{1}{1 + \exp(S_{net}(i,j))} .

The second processing layer is composed of product units [8] with each input raised to the first power. Let Pθ(i), Pnet(i), and Pout(i) denote the threshold, net input, and output of the ith unit in the second layer, and let Pw(i,î) denote the connection weight between Pin(î) and Pnet(i). Then

P_{in}(i) = \prod_{j=1}^{N_s} S_{out}(i,j) , \quad P_{net}(i) = P_\theta(i) + \sum_{\hat{i}=1}^{N_c} P_w(i,\hat{i})\, P_{in}(\hat{i}) , \quad P_{out}(i) = \frac{1}{1 + \exp(P_{net}(i))} .

The NNIN can be initialized and trained as follows. Let r_ij = [r_ij(1), r_ij(2), ..., r_ij(Nf)]^T and v_ij = [v_ij(1), v_ij(2), ..., v_ij(Nf)]^T respectively be the mean and variance vectors of the jth cluster of the ith class. Define the squared distance of the vector x to r_ij as

D_{ij}^2 = \sum_{k=1}^{N_f} \frac{[x(k) - r_{ij}(k)]^2}{v_{ij}(k)} .

Comparing Snet(i,j) with D²_ij term by term, we may assign the initial weights and threshold of the jth sigma-pi unit of the ith class as

S_{w2}(i,j,k) = \frac{1}{v_{ij}(k)} , \quad S_{w1}(i,j,k) = -\frac{2\, r_{ij}(k)}{v_{ij}(k)} , \quad S_\theta(i,j) = \sum_{k=1}^{N_f} \frac{r_{ij}^2(k)}{v_{ij}(k)} ,

so that Snet(i,j) = D²_ij. It is simple to initialize the second layer as Pw(i,î) = δ(i−î) and Pθ(i) = 0. Let Tout(i) be the desired output.
Np denotes the total number of training patterns.
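The initialization described above can be sketched numerically: expand the variance-weighted squared distance D²_ij and read off the coefficients of x²(k), x(k), and the constant term as the initial weights and threshold. The sketch below is illustrative (sign conventions and the toy data are assumptions, not from the paper); since the final output sigmoid is monotone, classification here uses Pin(i) directly.

```python
import numpy as np

def init_sigma_pi(means, variances):
    """Map cluster means r_ij(k) and variances v_ij(k), shape (Nc, Ns, Nf),
    to initial sigma-pi weights so that Snet(i,j) equals the
    variance-weighted squared distance D^2_ij."""
    Sw2 = 1.0 / variances                              # coefficient of x(k)^2
    Sw1 = -2.0 * means / variances                     # coefficient of x(k)
    Stheta = np.sum(means ** 2 / variances, axis=-1)   # constant term
    return Sw1, Sw2, Stheta

def nnin_scores(x, Sw1, Sw2, Stheta):
    """First-layer forward pass followed by the product stage."""
    Snet = Stheta + np.sum(Sw1 * x + Sw2 * x ** 2, axis=-1)  # = D^2_ij
    Sout = 1.0 / (1.0 + np.exp(Snet))   # shrinks as the distance grows
    Pin = np.prod(Sout, axis=-1)        # product over the Ns clusters of class i
    return Pin

# toy setup: Nc=2 classes, Ns=1 cluster each, Nf=2 features
means = np.array([[[0.0, 0.0]], [[1.0, 1.0]]])
variances = np.full(means.shape, 0.25)
Sw1, Sw2, Stheta = init_sigma_pi(means, variances)
scores = nnin_scores(np.array([0.1, 0.0]), Sw1, Sw2, Stheta)
print(int(np.argmax(scores)))  # -> 0 (the query is closest to class 0)
```

With a single cluster per class, the product over clusters is trivial; with Ns > 1, the product of the Sout values plays the role the Min operation plays in an ordinary NNC, which is the substitution the paper analyzes.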
Similar Resources
Identification of selected monogeneans using image processing, artificial neural network and K-nearest neighbor
Abstract Over the last two decades, improvements in computational tools have made significant contributions to classifying images of biological specimens into their corresponding species. These days, identification of biological species is much easier for taxonomists and even non-taxonomists due to the development of automated computer techniques and systems. In this study, we d...
Iterative improvement of a nearest neighbor classifier
In practical pattern recognition applications, the nearest neighbor classifier (NNC) is often applied because it does not require a priori knowledge of the joint probability density of the input feature vectors. As the number of example vectors is increased, the error probability of the NNC approaches that of the Bayesian classifier. However, at the same time, the computational complexity of ...
Classification, with Applications to Object and Shape Recognition in Image Databases
Nearest neighbor retrieval is the task of identifying, given a database of objects and a query object, the objects in the database that are the most similar to the query. Retrieving nearest neighbors is a necessary component of many practical applications, in fields as diverse as computer vision, pattern recognition, multimedia databases, bioinformatics, and computer networks. At the same time,...
Presentation of K Nearest Neighbor Gaussian Interpolation and comparing it with Fuzzy Interpolation in Speech Recognition
The Hidden Markov Model is a popular statistical method used in continuous and discrete speech recognition. The probability density function of the observation vectors in each state is estimated with discrete-density or continuous-density modeling. The performance (in correct word recognition rate) of continuous-density HMMs is higher than that of discrete-density HMMs, but their computational complexity is very ...
Islamic Star Pattern Images Recognition by Central Moment Invariants
In this paper, a proposed system for classifying Islamic geometric patterns, with emphasis on the representation and recognition stages, is introduced. Finding a unique feature for classifying IGPs is difficult and has not been accomplished to date because of the diversity of their structures. To implement this technique, we use shape-based classification. Geometric central moments have been utilize...
Journal title:
Volume, Issue:
Pages: -
Publication date: 1998